Agentic search tuning: faster and better
Stavros Macrakis and Daniel Wrigley • Location: TUECHTIG
Getting good results from a search engine is hard. Too hard. We all know the virtuous circle of search: algorithms, then measurement and analysis, then tuning, and back to algorithms. New algorithms keep arriving from search vendors, and there are more and more tools for measurement and analysis (OpenSearch UBI, OpenSource Connections' Quepid, the OpenSearch Search Relevance Workbench). But experimentation is slow and tuning is still manual.
Now, though, by taking advantage of LLM-based agents combined with interleaved A/B testing, we can automate the process, making it faster and more accurate. Building on an agentic infrastructure, we create a collection of agents, each specialized in a particular business problem and incorporating a variety of search strategies. The agents not only create tests and evaluate them, but also orchestrate their deployment.
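The interleaving piece of this loop can be illustrated with team-draft interleaving, a standard way to merge a baseline ranking and a candidate ranking so that user clicks decide the winner query by query. The sketch below is only an assumption about what such an evaluation component might look like, not the speakers' implementation; the ranker labels, document IDs, and click lists are placeholders, and in practice the candidate ranking would come from a search configuration proposed by an agent.

import random

def _draft_next(results, idx, team, label, interleaved):
    # Take the highest-ranked document from this ranker's list that has not
    # already been drafted; return the advanced index and whether a pick was made.
    while idx < len(results):
        doc = results[idx]
        idx += 1
        if doc not in team:
            team[doc] = label
            interleaved.append(doc)
            return idx, True
    return idx, False

def team_draft_interleave(results_a, results_b, k=10):
    # Merge two ranked lists so that each shown document is credited to the
    # ranker ("A" = baseline, "B" = candidate) that drafted it.
    interleaved, team = [], {}
    ia = ib = 0
    picks = {"A": 0, "B": 0}
    while len(interleaved) < k:
        # The ranker with fewer picks drafts next; ties are broken randomly.
        a_first = picks["A"] < picks["B"] or (
            picks["A"] == picks["B"] and random.random() < 0.5
        )
        took_any = False
        for label in ("A", "B") if a_first else ("B", "A"):
            if label == "A":
                ia, took = _draft_next(results_a, ia, team, "A", interleaved)
            else:
                ib, took = _draft_next(results_b, ib, team, "B", interleaved)
            if took:
                picks[label] += 1
                took_any = True
                break  # one pick per drafting turn
        if not took_any:
            break  # both lists exhausted
    return interleaved, team

def credit_clicks(team, clicked_docs):
    # Attribute each click to the ranker that drafted the clicked document;
    # the ranker with more credited clicks wins this impression.
    wins = {"A": 0, "B": 0}
    for doc in clicked_docs:
        if doc in team:
            wins[team[doc]] += 1
    return wins

# Hypothetical usage: "baseline" comes from the current query configuration,
# "candidate" from a configuration variant an agent has proposed.
baseline = ["d1", "d2", "d3", "d4"]
candidate = ["d3", "d1", "d5", "d2"]
merged, team = team_draft_interleave(baseline, candidate, k=4)
print(merged)
print(credit_clicks(team, clicked_docs=["d3", "d5"]))

Because interleaving compares both rankings within a single result list, it typically needs far less traffic than a classic A/B test to reach a decision, which is part of what makes an automated propose-test-deploy loop practical.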

Stavros Macrakis
OpenSearch @ AWS
Stavros Macrakis is the product manager for OpenSearch, focusing on document and e-commerce search. He has worked on search for almost 20 years and is passionate about search relevance in multiple contexts: web search (Lycos), enterprise search (FAST), specialized internal search (GLG), flight search (Google), and more.

Daniel Wrigley
OpenSource Connections
Daniel is a Search Consultant at OpenSource Connections. He has worked in search since graduating in computational linguistics from Ludwig-Maximilians-University Munich in 2012, where he developed his weakness for search and natural language processing. His experience as a search consultant paved the way to becoming an O'Reilly author, co-authoring the first German book on Apache Solr. He is an active contributor to open source projects and one of the maintainers of the Elasticsearch Learning to Rank plugin. Early in 2025 he founded a new meetup in Munich focused on advancements in modern search.